Dancing Between Success and Failure: Edit-level Simplification Evaluation using SALSA
Large language models (e.g., GPT-4) are uniquely capable of producing highly
rated text simplification, yet current human evaluation methods fail to provide
a clear understanding of systems' specific strengths and weaknesses. To address
this limitation, we introduce SALSA, an edit-based human annotation framework
that enables holistic and fine-grained text simplification evaluation. We
develop twenty-one linguistically grounded edit types, covering the full
spectrum of success and failure across dimensions of conceptual, syntactic, and
lexical simplicity. Using SALSA, we collect 19K edit annotations on 840
simplifications, revealing discrepancies in the distribution of simplification
strategies performed by fine-tuned models, prompted LLMs and humans, and find
that GPT-3.5 performs more quality edits than humans, but still exhibits frequent
errors. Using our fine-grained annotations, we develop LENS-SALSA, a
reference-free automatic simplification metric, trained to predict sentence-
and word-level quality simultaneously. Additionally, we introduce word-level
quality estimation for simplification and report promising baseline results.
Our data, new metric, and annotation toolkit are available at
https://salsa-eval.com.
Comment: Accepted to EMNLP 202
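The edit-level annotations described above can be pictured as span-tagged records over a source/simplification pair. A minimal sketch, with hypothetical field names that are not the actual SALSA schema:

```python
from dataclasses import dataclass

# Hypothetical shape of one edit-level annotation; field names and
# values are illustrative, not the actual SALSA schema or type names.
@dataclass
class EditAnnotation:
    edit_type: str             # one of the 21 linguistically grounded types
    dimension: str             # "conceptual", "syntactic", or "lexical"
    is_quality: bool           # quality edit vs. error
    src_span: tuple[int, int]  # character offsets in the source sentence
    tgt_span: tuple[int, int]  # character offsets in the simplification

ann = EditAnnotation("lexical_paraphrase", "lexical", True, (3, 12), (3, 8))
print(ann.dimension)  # lexical
```

Records like this support both the sentence-level and word-level views that LENS-SALSA is trained to predict.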
Multi-task Pairwise Neural Ranking for Hashtag Segmentation
Hashtags are often employed on social media and beyond to add metadata to a
textual utterance with the goal of increasing discoverability, aiding search,
or providing additional semantics. However, the semantic content of hashtags is
not straightforward to infer as these represent ad-hoc conventions which
frequently include multiple words joined together and can include abbreviations
and unorthodox spellings. We build a dataset of 12,594 hashtags split into
individual segments and propose a set of approaches for hashtag segmentation by
framing it as a pairwise ranking problem between candidate segmentations. Our
novel neural approaches demonstrate a 24.6% error reduction in hashtag
segmentation accuracy compared to the current state-of-the-art method. Finally,
we demonstrate that a deeper understanding of hashtag semantics obtained
through segmentation is useful for downstream applications such as sentiment
analysis, for which we achieved a 2.6% increase in average recall on the
SemEval 2017 sentiment analysis dataset.
Comment: 12 pages, ACL 201
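The candidate-ranking framing above can be sketched as follows: enumerate every contiguous segmentation of a hashtag, then pick the one a scorer ranks highest. The lexicon-based scorer here is a toy stand-in for the paper's neural pairwise ranker:

```python
from itertools import combinations

def candidate_segmentations(tag):
    """Every way to split a hashtag into contiguous segments."""
    segs = []
    for k in range(len(tag)):
        for cuts in combinations(range(1, len(tag)), k):
            pieces, prev = [], 0
            for c in cuts:
                pieces.append(tag[prev:c])
                prev = c
            pieces.append(tag[prev:])
            segs.append(pieces)
    return segs

# Toy lexicon scorer standing in for the neural pairwise ranker.
VOCAB = {"no", "fly", "list"}

def score(seg):
    return sum(w in VOCAB for w in seg) - 0.1 * len(seg)

def best_segmentation(tag):
    # With a pairwise ranker, the winner is the candidate that beats
    # all others; under a scalar scorer that is simply the argmax.
    return max(candidate_segmentations(tag), key=score)

print(best_segmentation("noflylist"))  # ['no', 'fly', 'list']
```

Enumerating all splits is exponential in hashtag length, which is workable for typical short hashtags; the pairwise-ranking model decides between candidates without hand-tuned scoring.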
Controllable Text Simplification with Explicit Paraphrasing
Text Simplification improves the readability of sentences through several
rewriting transformations, such as lexical paraphrasing, deletion, and
splitting. Current simplification systems are predominantly
sequence-to-sequence models that are trained end-to-end to perform all these
operations simultaneously. However, such systems limit themselves to mostly
deleting words and cannot easily adapt to the requirements of different target
audiences. In this paper, we propose a novel hybrid approach that leverages
linguistically-motivated rules for splitting and deletion, and couples them
with a neural paraphrasing model to produce varied rewriting styles. We
introduce a new data augmentation method to improve the paraphrasing capability
of our model. Through automatic and manual evaluations, we show that our
proposed model establishes a new state-of-the-art for the task, paraphrasing
more often than the existing systems, and can control the degree of each
simplification operation applied to the input texts.
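The hybrid design above can be caricatured as hand-written splitting rules feeding a paraphrasing step whose aggressiveness is controllable. The single rule and tiny lexicon below are toy stand-ins, not the paper's actual linguistic rules or neural paraphraser:

```python
import re

# Toy lexicon standing in for the neural paraphrasing model.
SIMPLER = {"purchased": "bought", "automobile": "car"}

def rule_split(sentence):
    """Stand-in 'linguistic' rule: break on a clause-joining ', and'."""
    parts = re.split(r",\s*and\s+", sentence)
    return [p.rstrip(".").strip() + "." for p in parts]

def paraphrase(sentence, degree=1.0):
    """Toy lexical paraphraser; `degree` caps how many words change."""
    words = sentence.split()
    budget = int(degree * len(words))
    out = []
    for w in words:
        bare = w.strip(".,").lower()
        if budget > 0 and bare in SIMPLER:
            out.append(w.replace(bare, SIMPLER[bare]))
            budget -= 1
        else:
            out.append(w)
    return " ".join(out)

def simplify(sentence, degree=1.0):
    sents = [s[:1].upper() + s[1:] for s in rule_split(sentence)]
    return [paraphrase(s, degree) for s in sents]

print(simplify("He purchased a large automobile, and she was delighted."))
# ['He bought a large car.', 'She was delighted.']
```

Setting `degree=0` disables paraphrasing while keeping the rule-based split, which is the kind of per-operation control the abstract describes.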
LENS: A Learnable Evaluation Metric for Text Simplification
Training learnable metrics using modern language models has recently emerged
as a promising method for the automatic evaluation of machine translation.
However, existing human evaluation datasets for text simplification have
limited annotations that are based on unitary or outdated models, making them
unsuitable for this approach. To address these issues, we introduce the
SimpEval corpus that contains: SimpEval_past, comprising 12K human ratings on
2.4K simplifications of 24 past systems, and SimpEval_2022, a challenging
simplification benchmark consisting of over 1K human ratings of 360
simplifications, including GPT-3.5-generated text. Training on SimpEval, we
present LENS, a Learnable Evaluation Metric for Text Simplification. Extensive
empirical results show that LENS correlates much better with human judgment
than existing metrics, paving the way for future progress in the evaluation of
text simplification. We also introduce Rank and Rate, a human evaluation
framework that rates simplifications from several models in a list-wise manner
using an interactive interface, which ensures both consistency and accuracy in
the evaluation process and is used to create the SimpEval datasets.
Comment: Accepted at ACL 202
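"Correlates with human judgment" means the metric's scores and human ratings rise and fall together across system outputs. A minimal sketch of that check, using made-up numbers (not SimpEval data) and plain Pearson correlation:

```python
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation between metric scores and human ratings."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy)

# Toy numbers for illustration only: metric scores vs. human ratings.
metric = [0.91, 0.42, 0.77, 0.30]
human = [88, 45, 80, 35]
print(round(pearson(metric, human), 3))
```

Evaluation papers in this area often report rank correlations (Kendall or Spearman) as well; Pearson is shown here only because it is the simplest to compute from scratch.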